The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
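As a minimal sketch (not taken from the survey itself) of two of the practices the respondents reported, the snippet below combines k-fold cross-validation on the training set with ensembling of the resulting identical models by averaging their predicted probabilities; the dataset and classifier are placeholders.

```python
# Illustrative sketch: 5-fold cross-validation plus ensembling of the per-fold
# models by averaging class probabilities (the "multiple identical models" case).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, y_train, X_test = X[:400], y[:400], X[400:]

fold_models = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_train):
    model = RandomForestClassifier(random_state=0)
    model.fit(X_train[train_idx], y_train[train_idx])
    fold_models.append(model)

# Ensemble prediction: average the per-fold class probabilities, then take argmax.
ensemble_proba = np.mean([m.predict_proba(X_test) for m in fold_models], axis=0)
ensemble_pred = ensemble_proba.argmax(axis=1)
```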
We apply computer vision pose estimation techniques developed expressly for the data-scarce infant domain to the study of torticollis, a common condition in infants for which early identification and treatment are critical. Specifically, we use a combination of facial landmark and body joint estimation techniques designed for infants to estimate a range of geometric measures pertaining to face and upper body symmetry, drawn from an array of sources in the physical therapy and ophthalmology research literature on torticollis. We gauge performance with a range of metrics and show that the estimates of most of these geometric measures are successful, yielding strong to very strong Spearman's $\rho$ correlation with ground truth values. Furthermore, we show that these estimates, derived from pose estimation neural networks designed for the infant domain, cleanly outperform estimates derived from more widely known networks designed for the adult domain.
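A minimal sketch of the evaluation described above: a geometric symmetry measure estimated from predicted landmarks is compared against ground-truth values via Spearman's rank correlation. The arrays below are illustrative, not data from the study.

```python
# Compare an estimated geometric measure (e.g., a head-tilt angle in degrees)
# against ground-truth values using Spearman's rank correlation.
import numpy as np
from scipy.stats import spearmanr

ground_truth = np.array([4.1, 12.3, 7.8, 0.5, 9.9, 15.2])   # hypothetical ground-truth angles
estimated    = np.array([3.8, 11.0, 8.4, 1.1, 9.2, 14.7])   # hypothetical pose-based estimates

rho, p_value = spearmanr(ground_truth, estimated)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```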
Lane detection is one of the fundamental modules in autonomous driving. In this paper, we adopt a transformer-only method for lane detection; it can therefore benefit from the rapid development of fully vision transformers and from fine-tuning weights that have been fully pre-trained on large datasets. More importantly, this paper proposes a novel and general framework named PriorLane, which is used to enhance the segmentation performance of fully vision transformers by introducing low-cost local prior knowledge. PriorLane uses an encoder-only transformer to fuse the features extracted by a pre-trained segmentation model with prior knowledge embeddings. Note that a Knowledge Embedding Alignment (KEA) module improves the fusion performance by aligning the knowledge embeddings. Extensive experiments on our ZJLAB dataset show that PriorLane outperforms state-of-the-art lane detection methods by 2.82% mIoU, and the code will be released at: https://github.com/vincentqqb/priorlane.
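The following is a rough structural sketch of the fusion idea described above: an encoder-only transformer that attends jointly over image feature tokens from a pre-trained segmentation backbone and prior-knowledge embedding tokens. The shapes, layer counts, and the simple linear alignment step are assumptions for illustration, not PriorLane's actual design.

```python
# Structural sketch of encoder-only fusion of segmentation features with
# prior-knowledge embeddings (KEA-style alignment approximated by a Linear layer).
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    def __init__(self, dim=256, n_layers=4, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.align = nn.Linear(dim, dim)  # stand-in for aligning the knowledge embeddings

    def forward(self, feat_tokens, knowledge_tokens):
        # feat_tokens: (B, N_feat, dim) from the pre-trained segmentation model
        # knowledge_tokens: (B, N_know, dim) encoding local prior knowledge
        tokens = torch.cat([feat_tokens, self.align(knowledge_tokens)], dim=1)
        fused = self.encoder(tokens)
        return fused[:, :feat_tokens.size(1)]  # keep only the image-feature positions

fused = FusionEncoder()(torch.randn(2, 196, 256), torch.randn(2, 16, 256))
```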
Bilateral postural symmetry plays a key role as a potential risk marker for autism spectrum disorder (ASD) and as a symptom of congenital muscular torticollis (CMT) in infants, but current methods of assessing symmetry require laborious clinical expert assessments. In this paper, we develop a computer vision based infant symmetry assessment system that leverages 3D human pose estimation for infants. Evaluation and calibration of our system is informed by a survey of human ratings of angle and symmetry, which we find to exhibit low inter-rater reliability. To rectify this, we develop a Bayesian estimator derived from a probabilistic graphical model of fallible human raters. We show that, in predicting the Bayesian aggregate labels, the 3D infant pose estimation model achieves 68% area under the receiver operating characteristic curve, compared with only 61% for a 2D infant pose estimation model and 60% for a 3D adult pose estimation model, highlighting the importance of 3D poses and infant domain knowledge in assessing infant body symmetry. Our survey analysis also shows that human ratings are susceptible to higher levels of bias and inconsistency; accordingly, our final 3D pose-based symmetry assessment system is calibrated against, but not directly supervised by, the Bayesian aggregate human ratings, yielding higher consistency and lower levels of inter-limb assessment bias.
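As a toy illustration of aggregating labels from fallible raters (not the paper's exact graphical model), the sketch below combines binary symmetry ratings via Bayes' rule, assuming each rater has a known sensitivity and specificity and a uniform prior.

```python
# Toy Bayesian aggregation of binary ratings from raters with known reliability.
import numpy as np

def bayesian_aggregate(ratings, sensitivity, specificity, prior=0.5):
    """ratings: 0/1 votes from each rater; returns P(asymmetric | ratings)."""
    log_odds = np.log(prior / (1 - prior))
    for r, se, sp in zip(ratings, sensitivity, specificity):
        if r == 1:   # rater flags asymmetry: likelihood ratio se / (1 - sp)
            log_odds += np.log(se / (1 - sp))
        else:        # rater sees symmetry: likelihood ratio (1 - se) / sp
            log_odds += np.log((1 - se) / sp)
    return 1 / (1 + np.exp(-log_odds))

# Three raters of varying reliability voting [asymmetric, symmetric, asymmetric]:
print(bayesian_aggregate([1, 0, 1], sensitivity=[0.9, 0.7, 0.8], specificity=[0.85, 0.75, 0.8]))
```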
Large language models have achieved high performance on various question answering (QA) benchmarks, but the explainability of their output remains elusive. Structured explanations, called entailment trees, were recently proposed as a way to explain and inspect a QA system's answer. To better generate such trees, we propose an architecture called the Iterative Retrieval-Generation Reasoner (IRGR). Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises. The IRGR model iteratively searches for suitable premises, constructing a single entailment step at a time. In contrast to previous approaches, our method combines generation steps with the retrieval of premises, allowing the model to leverage intermediate conclusions and mitigating the input size limit of baseline encoder-decoder models. We conduct experiments on the EntailmentBank dataset, where we outperform existing benchmarks on premise retrieval and entailment tree generation, with around a 300% gain in overall correctness.
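Below is a schematic sketch of the iterative retrieve-then-generate loop described above, using TF-IDF cosine similarity as a stand-in retriever and a placeholder string join in place of the trained generator; the premises, hypothesis, and loop length are invented for illustration.

```python
# Schematic iterative retrieval-generation loop: retrieve premises, form an
# intermediate conclusion, and condition the next retrieval on it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

premises = [
    "an animal requires warmth for survival",
    "a mouse is a kind of animal",
    "winter is the coldest season",
    "plants produce oxygen",
]
hypothesis = "a mouse requires warmth to survive the winter"

vectorizer = TfidfVectorizer().fit(premises + [hypothesis])
remaining, query, tree_steps = list(premises), hypothesis, []

for step in range(2):  # build one entailment step per iteration
    sims = cosine_similarity(vectorizer.transform([query]), vectorizer.transform(remaining))[0]
    selected = [remaining[i] for i in sims.argsort()[-2:][::-1]]   # two most relevant premises
    for s in selected:
        remaining.remove(s)
    intermediate = " AND ".join(selected)          # placeholder for a generated intermediate conclusion
    tree_steps.append((selected, intermediate))
    query = hypothesis + " " + intermediate        # leverage the intermediate conclusion next round

print(tree_steps)
```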
Infant motion analysis is a topic of critical importance in early childhood development studies. However, while the applications of human pose estimation are becoming ever broader, models trained on large-scale adult pose datasets can hardly estimate infant poses, owing to the significant differences in body ratios and the greater variety of infant poses. Moreover, privacy and security considerations hinder the availability of the infant pose data needed to train a robust model from scratch. To address this problem, this paper presents (1) the construction and public release of a hybrid synthetic and real infant pose (SyRIP) dataset, containing a small but diverse set of real infant images together with generated synthetic infant poses, and (2) a multi-stage invariant representation learning strategy that transfers knowledge from the adjacent domains of adult poses and synthetic infant images into our fine-tuned domain-adapted infant pose (FiDIP) estimation model. In our ablation study, with the same network structure, models trained on the SyRIP dataset show clear improvements over those trained on the only other public infant pose dataset. Integrated with pose estimation backbone networks of varying complexity, FiDIP consistently outperforms the fine-tuned versions of those models. Our best infant pose estimator, built on the state-of-the-art DarkPose model, achieves a mean average precision (mAP) of 93.6.
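For illustration of the kind of synthetic-to-real transfer discussed above, here is a generic sketch of domain-adaptive fine-tuning with a gradient-reversal domain classifier; it is a common pattern for such transfer, not a description of FiDIP's actual architecture, and the backbone and dimensions are placeholders.

```python
# Generic domain-adaptive fine-tuning sketch: a pose head plus a domain head
# trained through a gradient-reversal layer to make features domain-invariant.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class DomainAdaptivePose(nn.Module):
    def __init__(self, feat_dim=128, n_keypoints=17):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.pose_head = nn.Linear(feat_dim, n_keypoints * 2)   # (x, y) per keypoint
        self.domain_head = nn.Linear(feat_dim, 2)               # synthetic vs. real

    def forward(self, images, lam=1.0):
        feats = self.backbone(images)
        keypoints = self.pose_head(feats)
        domain_logits = self.domain_head(GradReverse.apply(feats, lam))
        return keypoints, domain_logits

kpts, dom = DomainAdaptivePose()(torch.randn(4, 3, 64, 64))
```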
The current high-fidelity generation and high-precision detection of DeepFake images are locked in an arms race. We argue that producing highly realistic and "detection evasive" DeepFakes can serve the ultimate goal of improving the detection capabilities of future generations of DeepFake detectors. In this paper, we propose a simple yet powerful pipeline that reduces the artifact patterns of fake images without hurting image quality by performing implicit spatial-domain notch filtering. We first show that frequency-domain notch filtering, although effective at removing periodic noise patterns, is impractical for our task because of the manual design required for the notch filters. We therefore resort to a learning-based approach to reproduce the notch filtering effect, but solely in the spatial domain. We combine the addition of overwhelming spatial noise, which breaks the periodic noise pattern, with deep image filtering to reconstruct noise-free fake images, and we name our method DeepNotch. Deep image filtering provides a dedicated filter for each pixel in the noisy image, producing filtered images with high fidelity compared with their DeepFake counterparts. In addition, we use the semantic information of the image to generate an adversarial guidance map for adding noise intelligently. Our large-scale evaluation of three representative state-of-the-art DeepFake detection methods (tested on 16 types of DeepFakes) demonstrates that our technique significantly reduces the accuracy of these three fake image detection methods, by 36.79% on average and by up to 97.02% in the best case.
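As an illustrative NumPy sketch of the classical baseline the paragraph contrasts with, the snippet below performs frequency-domain notch filtering: spectral peaks corresponding to a periodic artifact are located and zeroed by hand. The peak positions and radius are arbitrary choices, which is exactly the manual-design burden the paper argues against.

```python
# Classical frequency-domain notch filtering: zero small disks around the
# spectral peaks of a periodic artifact (and their conjugate-symmetric twins).
import numpy as np

def notch_filter(image, notch_centers, radius=3):
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = np.ogrid[:image.shape[0], :image.shape[1]]
    for (u, v) in notch_centers:
        for cu, cv in [(u, v), (image.shape[0] - u, image.shape[1] - v)]:
            mask = (rows - cu) ** 2 + (cols - cv) ** 2 <= radius ** 2
            spectrum[mask] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

# Synthetic example: an image corrupted by a horizontal sinusoidal pattern
# (8 cycles across the width), whose spectral peaks sit at (h/2, w/2 +- 8).
h = w = 128
yy, xx = np.mgrid[:h, :w]
image = np.random.rand(h, w) * 0.1 + 0.5 * np.sin(2 * np.pi * xx * 8 / w)
cleaned = notch_filter(image, notch_centers=[(h // 2, w // 2 + 8)])
```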
Pre-trained language models for programming languages have shown a powerful ability to handle many Software Engineering (SE) tasks, e.g., program synthesis, code completion, and code search. However, what lies behind their success remains to be seen. Recent studies have examined how pre-trained models can effectively learn syntax information based on Abstract Syntax Trees (ASTs). In this paper, we investigate what role the self-attention mechanism plays in understanding code syntax and semantics based on AST and static analysis. We focus on a well-known representative code model, CodeBERT, and study how it learns code syntax and semantics through the self-attention mechanism and Masked Language Modelling (MLM) at the token level. We propose a group of probing tasks to analyze CodeBERT. Based on AST and static analysis, we establish the relationships among code tokens. First, our results show that CodeBERT can acquire syntax and semantics knowledge through self-attention and MLM. Second, we demonstrate that the self-attention mechanism pays more attention to dependence-relationship tokens than to other tokens. Different attention heads play different roles in learning code semantics, and we show that some of them are weak at encoding it. Different layers have different competencies for representing different code properties: deep CodeBERT layers can encode semantic information that requires complex inference in the code context. More importantly, we show that our analysis is actionable and leverage our conclusions to improve CodeBERT. We present an alternative approach for pre-training models that makes full use of the current pre-training strategy, i.e., MLM, to learn code syntax and semantics, instead of combining features from different code data formats, e.g., data flow, run-time states, and program outputs.
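The small, self-contained example below shows the kind of token-level self-attention inspection described above: it loads the public Hugging Face CodeBERT checkpoint and extracts per-layer, per-head attention weights for a code snippet. The snippet and the specific indexing are illustrative, not the paper's probing setup.

```python
# Extract CodeBERT self-attention maps for a code snippet via Hugging Face transformers.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base", output_attentions=True)

code = "def add(a, b): return a + b"
inputs = tokenizer(code, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

attentions = outputs.attentions          # tuple: one (1, heads, seq, seq) tensor per layer
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
last_layer_head0 = attentions[-1][0, 0]  # attention map of head 0 in the final layer
print(len(attentions), last_layer_head0.shape, tokens[:5])
```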
In this paper we present our contribution to the TSAR-2022 Shared Task on Lexical Simplification of the EMNLP 2022 Workshop on Text Simplification, Accessibility, and Readability. Our approach builds on and extends the unsupervised Lexical Simplification with Pretrained Encoders (LSBert) system in the following ways: for the subtask of simplification candidate selection, it utilizes a RoBERTa transformer language model and expands the size of the generated candidate list. For subsequent substitution ranking, it introduces a new feature weighting scheme and adopts a candidate filtering method based on textual entailment to maximize semantic similarity between the target word and its simplification. Our best-performing system improves LSBert by 5.9% accuracy and achieves second place out of 33 ranked solutions.
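Here is a hedged sketch of the candidate-generation step: the complex word is masked and a RoBERTa masked language model proposes substitution candidates. This mirrors the LSBert-style pipeline only in spirit; the actual system's context handling, candidate list size, ranking features, and entailment-based filtering differ.

```python
# Generate simplification candidates for a target word with a RoBERTa fill-mask model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")

sentence = "The committee will scrutinize the proposal."
complex_word = "scrutinize"
masked = sentence.replace(complex_word, fill_mask.tokenizer.mask_token)

candidates = [pred["token_str"].strip() for pred in fill_mask(masked, top_k=20)]
candidates = [c for c in candidates if c.lower() != complex_word]   # drop the original word
print(candidates[:10])
```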
State-of-the-art text simplification (TS) systems adopt end-to-end neural network models to directly generate the simplified version of the input text, and usually function as a black box. Moreover, TS is usually treated as an all-purpose generic task under the assumption of homogeneity, where the same simplification is suitable for all users. In recent years, however, there has been increasing recognition of the need to adapt simplification techniques to the specific needs of different target groups. In this work, we aim to advance current research on explainable and controllable TS in two ways: first, building on recently proposed work to increase the transparency of TS systems, we use a large set of (psycho-)linguistic features in combination with pre-trained language models to improve explainable complexity prediction. Second, based on the results of this preliminary task, we extend a state-of-the-art Seq2Seq TS model, ACCESS, to enable explicit control of ten attributes. The experimental results show (1) that our approach improves the performance of state-of-the-art models for predicting explainable complexity and (2) that explicitly conditioning the Seq2Seq model on ten attributes leads to a significant improvement in performance in both within-domain and out-of-domain settings.
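The sketch below illustrates explicit attribute conditioning in an ACCESS-style Seq2Seq setup: desired attribute ratios are serialized as control tokens and prepended to the source sentence before encoding. The token naming scheme and the three attributes shown are assumptions for illustration, not the exact format or full ten-attribute set used in the extended model.

```python
# Serialize target attribute ratios as control tokens prepended to the source sentence.
def add_control_tokens(sentence, attributes):
    prefix = " ".join(f"<{name}_{value:.2f}>" for name, value in attributes.items())
    return f"{prefix} {sentence}"

attributes = {
    "NbChars": 0.80,    # target compression ratio in characters
    "LevSim": 0.75,     # target Levenshtein similarity to the source
    "WordRank": 0.70,   # target lexical-complexity ratio
}
print(add_control_tokens("The legislation was subsequently ratified.", attributes))
```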